Remote Sensing and Sensors
Mapping Research Based on Solid-State LiDAR Fusion with 2D LiDAR
Tianxiang Zhang, Liming Cai, Chuanyun Ouyang, Xiankai Cheng, and Shuhao Yan
To address the issue of incomplete spatial environment information acquisition in traditional two-dimensional (2D) light detection and ranging (LiDAR) mapping, we propose a mapping strategy that fuses solid-state LiDAR and 2D LiDAR within the Gmapping algorithm. First, we project the solid-state LiDAR point cloud onto a plane. The resulting laser data are then combined with the optimal particle trajectory in the Gmapping algorithm to construct a grid map, which is integrated with the grid map carried by the optimal particle to produce a fused map capable of identifying spatial obstacles. To enhance mapping accuracy, we employ an extended Kalman filter to dynamically fuse the weights of the wheel odometer, laser odometer, and inertial measurement unit. This addresses the reduced accuracy of the fused odometry in scenarios involving wheel slippage or laser-odometer feature-matching failures in feature-poor environments. Finally, we test the fused map and the fused odometry method experimentally. The results demonstrate that the fused map effectively identifies spatial obstacles, and the fused odometry achieves an average positioning accuracy improvement of 17.0% over traditional methods. (An illustrative fusion sketch follows this entry.)
Laser & Optoelectronics Progress
  • Publication Date: Apr. 25, 2024
  • Vol. 61, Issue 8, 0828006 (2024)
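Aside: the paper fuses the three odometry sources with an extended Kalman filter, whose implementation the abstract does not give. As a simpler stand-in, the sketch below illustrates the underlying idea of dynamically weighted fusion: each source's covariance is inflated when its health degrades (wheel slip, weak laser feature matching), and the sources are then combined by inverse-covariance weighting. All names, thresholds, and numbers are illustrative assumptions, not the authors' code.

```python
import numpy as np

def fuse_odometry(estimates, covariances):
    """Inverse-covariance (information-style) fusion of several 2D pose
    estimates [x, y, yaw]. Smaller covariance -> larger weight."""
    info = [np.linalg.inv(P) for P in covariances]
    fused_cov = np.linalg.inv(sum(info))
    fused_state = fused_cov @ sum(I @ x for I, x in zip(info, estimates))
    return fused_state, fused_cov

def dynamic_covariance(base_cov, slip_detected=False, match_score=1.0):
    """Inflate a source's covariance when its health degrades: wheel slip,
    or a poor laser scan-match score in a feature-poor scene.
    (Thresholds and inflation factors are illustrative assumptions.)"""
    cov = base_cov.copy()
    if slip_detected:           # wheel encoder unreliable
        cov *= 100.0
    if match_score < 0.5:       # weak feature matching
        cov *= 1.0 / max(match_score, 1e-3)
    return cov

# Example: wheel odometry slipping, laser and IMU healthy.
wheel = np.array([1.02, 0.01, 0.005])
laser = np.array([1.00, 0.00, 0.002])
imu   = np.array([1.01, 0.00, 0.003])
covs = [dynamic_covariance(np.eye(3) * 0.01, slip_detected=True),
        dynamic_covariance(np.eye(3) * 0.02, match_score=0.9),
        np.eye(3) * 0.05]
state, cov = fuse_odometry([wheel, laser, imu], covs)
print(state)   # dominated by the laser and IMU estimates
```

In a full EKF, the same inflated covariances would enter the measurement-update step rather than a one-shot fusion.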
The Study of Characteristics and Imaging of Infrared Radiation of Middle and Upper Atmosphere Background
Jie Shi, Bo Shi, Feng Jin, Shilin Xing, and Weilong Gou
This study proposes an infrared imaging and detection model to address the infrared imaging and radiation characteristics of the middle and upper atmospheric background. The applicability of the moderate-spectral-resolution atmospheric radiative transfer model (MODTRAN) in the infrared band is analyzed, and the strategic high-altitude radiance code (SHARC) is used to simulate and analyze the infrared radiation characteristics of the middle and upper atmospheric background under different observation parameters in the 3‒5 μm band. A radiation-characteristic database is then established to complete imaging simulation of infrared radiation scenes against the middle and upper atmospheric background. The results demonstrate that MODTRAN has good computational accuracy in the 3‒5 and 8‒12 μm bands at tangent heights below 50 and 70 km, respectively; the middle and upper atmospheric background radiance decreases with increasing tangent height and solar zenith angle but increases with increasing observation zenith angle; short- and long-path radiation characteristics are primarily influenced by the path length and by the atmospheric parameters of the lower atmosphere, respectively; and the daytime and nighttime radiance reaches its global maximum at 36 and 34 km, respectively, with local maxima at 75 and 85 km. These results provide theoretical support for infrared detection of middle and upper atmospheric backgrounds. (A database-query sketch follows this entry.)
Laser & Optoelectronics Progress
  • Publication Date: Apr. 25, 2024
  • Vol. 61, Issue 8, 0828005 (2024)
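Aside: the abstract mentions building a radiation-characteristic database from SHARC runs. Below is a minimal sketch of how such a database might be queried, assuming (hypothetically) that band radiance is tabulated over tangent height and solar zenith angle and interpolated at query time; the grid values are random placeholders in arbitrary units, not simulation output.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Placeholder database: band radiance (arbitrary units) tabulated offline
# on a (tangent height, solar zenith angle) grid.
tangent_height_km = np.array([20.0, 40.0, 60.0, 80.0, 100.0])
solar_zenith_deg  = np.array([0.0, 30.0, 60.0, 90.0])
radiance_grid = np.random.default_rng(0).uniform(
    1e-4, 1e-2, size=(tangent_height_km.size, solar_zenith_deg.size))

lookup = RegularGridInterpolator(
    (tangent_height_km, solar_zenith_deg), radiance_grid)

# Query the database for an arbitrary viewing geometry.
print(lookup([[36.0, 45.0]]))   # radiance at 36 km tangent height, SZA 45°
```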
Target Localization and Tracking Method Based on Camera and LiDAR Fusion
Pu Zhang, Jinqing Liu, Jinchao Xiao, Junfeng Xiong, Tianwei Feng, and Zhongze Wang
Environmental perception is a key technology for unmanned driving. However, cameras lack the depth information needed to locate and detect targets and offer poor tracking accuracy; therefore, a target localization and tracking algorithm based on the fusion of camera and LiDAR data is proposed. The algorithm obtains the position of a detected target from the proportion of the LiDAR point cloud cluster's area, projected onto the pixel plane, that falls within the image detection frame. Then, based on the horizontal and vertical velocities of the detected target's contour point cloud in the pixel coordinate system, the center coordinates of the image detection frame are fused to improve tracking accuracy. Experimental results show that the proposed localization algorithm achieves an accuracy of 88.5417% with an average processing time of only 0.03 s per frame, meeting real-time requirements. The average error of the detection frame center is 4.49 pixels on the horizontal axis and 1.80 pixels on the vertical axis, and the average area overlap rate is 87.42%. (A projection sketch follows this entry.)
Laser & Optoelectronics Progress
  • Publication Date: Apr. 25, 2024
  • Vol. 61, Issue 8, 0828004 (2024)
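Aside: a minimal sketch of the geometric core the abstract describes, projecting a LiDAR cluster into the pixel plane and measuring how much of it falls inside a detection box. The calibration matrices, the data, and the use of a point-count fraction as a proxy for the area proportion are illustrative assumptions.

```python
import numpy as np

def project_points(points_lidar, T_cam_lidar, K):
    """Project 3D LiDAR points into the pixel plane using camera
    extrinsics T_cam_lidar (4x4) and intrinsics K (3x3)."""
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    pts_cam = pts_cam[pts_cam[:, 2] > 0]            # keep points in front
    uv = (K @ pts_cam.T).T
    return uv[:, :2] / uv[:, 2:3], pts_cam[:, 2]    # pixels, depths

def in_box_ratio(uv, box):
    """Fraction of the cluster's projected points inside a detection
    box (x1, y1, x2, y2) -- a point-count proxy for area proportion."""
    x1, y1, x2, y2 = box
    inside = ((uv[:, 0] >= x1) & (uv[:, 0] <= x2) &
              (uv[:, 1] >= y1) & (uv[:, 1] <= y2))
    return inside.mean() if len(uv) else 0.0

# Hypothetical calibration and cluster data.
K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])
T = np.eye(4)
pts = np.random.rand(100, 3) * [4, 2, 10] + [0, 0, 1]
uv, _ = project_points(pts, T, K)
print(in_box_ratio(uv, (300, 200, 500, 400)))
```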
Lightweight Bilateral Input D-WNet Aerial Image Building Change Detection
Fengxing Zhang, Jian Huang, and Hao Li
A lightweight dual-input change detection network, D-WNet, is proposed to address the susceptibility of traditional semantic segmentation networks to interference from shadows and other ground objects, as well as their rough segmentation of building boundaries. The network starts from W-Net and replaces its cumbersome convolution and downsampling stages with depthwise separable convolution blocks and atrous spatial pyramid pooling (ASPP) modules. It uses a right-branch feature encoder to strengthen the fusion of high-level and low-level features and introduces channel and spatial attention mechanisms in the upsampling section of the decoder to extract effective features at different dimensions. The resulting D-WNet shows significantly improved performance. Experiments were conducted on the publicly available WHU and LEVIR-CD building change detection datasets, and the results were compared with the W-Net, U-Net, ResNet, SENet, and DeepLabv3+ semantic segmentation networks. The experimental results show that D-WNet performs well on five indicators (intersection over union, F1 score, recall, accuracy, and running time) and yields more accurate change detection in shadow-interfered and building-edge areas. (A sketch of the two building blocks follows this entry.)
Laser & Optoelectronics Progress
  • Publication Date: Apr. 25, 2024
  • Vol. 61, Issue 8, 0828003 (2024)
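Aside: the two building blocks the abstract names are standard components; below is a generic PyTorch sketch of a depthwise separable convolution block and an ASPP module. Channel counts and dilation rates are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise 3x3 conv followed by a 1x1 pointwise conv: far fewer
    parameters than a standard 3x3 convolution."""
    def __init__(self, in_ch, out_ch, dilation=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=dilation,
                                   dilation=dilation, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

class ASPP(nn.Module):
    """Atrous spatial pyramid pooling: parallel dilated branches capture
    multi-scale context; a 1x1 conv fuses the concatenated outputs."""
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList(
            [DepthwiseSeparableConv(in_ch, out_ch, dilation=r) for r in rates])
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

x = torch.randn(1, 64, 128, 128)
print(ASPP(64, 64)(x).shape)   # torch.Size([1, 64, 128, 128])
```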
Remote Sensing Image Fusion Based on Particle Swarm Optimization and Adaptive Injection Model
Shize Li, and Yan Dong
To address the loss of spectral and spatial detail and the blurring of results during fusion, a remote sensing image fusion method based on particle swarm optimization is proposed. The method first preprocesses the original image to derive an edge detection matrix for each channel. The spectral coverage coefficient is then determined by the least-squares method to generate a more precise image. Finally, an adaptive injection model framework is proposed that combines a weighting matrix, particle swarm optimization, and the relative dimensionless global error in synthesis (ERGAS) index as the objective function to optimize the edge-detection weights; the band weights of the dataset are calculated to generate the final fused image. The performance of five fusion methods is assessed on three remote sensing satellite images of varying resolution (WorldView-2, GF-2, and GeoEye) by quantitatively analyzing six evaluation indicators. The results indicate that the proposed method outperforms the other methods in subjective visual quality and in objective indicators such as average gradient and spatial frequency, while retaining spectral and spatial information well. (An ERGAS sketch follows this entry.)
Laser & Optoelectronics Progress
  • Publication Date: Apr. 25, 2024
  • Vol. 61, Issue 8, 0828002 (2024)
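Aside: the ERGAS index used as the optimization objective has a standard closed form, sketched minimally below; a particle swarm optimizer searching injection weights would call it as the fitness to minimize. The resolution ratio value is an example.

```python
import numpy as np

def ergas(reference, fused, ratio=0.25):
    """ERGAS (relative dimensionless global error in synthesis).
    reference, fused: (bands, H, W) arrays; ratio: h/l, the ratio of the
    high-resolution pixel size to the low-resolution pixel size
    (e.g., 0.25 for 1:4 pansharpening). Lower is better."""
    err = 0.0
    for ref_b, fus_b in zip(reference, fused):
        rmse = np.sqrt(np.mean((ref_b - fus_b) ** 2))
        err += (rmse / ref_b.mean()) ** 2
    return 100.0 * ratio * np.sqrt(err / reference.shape[0])

# Example: identical images give ERGAS = 0.
ref = np.random.rand(4, 64, 64) + 0.5
print(ergas(ref, ref))          # 0.0
print(ergas(ref, ref + 0.05))   # small positive error
```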
Classification Method of Remote Sensing Image Based on Dynamic Weight Transform and Dual Network Self Verification
Qingfang Zhang, Ming Cong, Ling Han, Jiangbo Xi, Qingqing Jing, Jianjun Cui, Chengsheng Yang, Chaofeng Ren, Junkai Gu, Miaozhong Xu, and Yiting Tao
Popular neural networks currently struggle to accurately recognize diverse surface targets and tend to introduce significant noise and errors under limited samples and weak supervision. This study therefore proposes a dual-network remote sensing image classification method based on dynamic weight deformation, developed after analyzing the features of remote sensing images. By constructing a flexible, simple, and effective dynamic weight deformation structure, we build an improved classification network and a target recognition network, introducing the self-verification capability of dual-network comparison. This enhances learning performance, corrects errors, supplements omissions, and improves recognition efficiency and classification accuracy. Experimental comparisons show that the proposed method is easy to implement and exhibits stronger cognitive ability and noise resistance, confirming its adaptability to various remote sensing image classification tasks and its broad application potential.
Laser & Optoelectronics Progress
  • Publication Date: Apr. 25, 2024
  • Vol. 61, Issue 8, 0828001 (2024)
Improved Lidar Odometer Based on Motion Prediction
Zheng Qin, Xiangchuan Gao, Zhengkang Chen, Yifan Lu, and Lingbo Qu
To address output trajectory drift of the lidar odometer in large-scale outdoor building-mapping scenes, a continuous motion prediction algorithm based on the normal distributions transform (NDT) is proposed to improve the accuracy of the initial estimate for point cloud matching when the odometer is constructed from lidar alone. Matching each frame against a local map, instead of inter-frame matching, then effectively suppresses drift of the motion trajectory. The algorithm is verified on two different scenarios from the KITTI dataset: the improved lidar odometer reduces the global average errors of the two trajectories by 27.93% and 36.66%, and the maximum Z-axis deviations by 70.29% and 82.52%. The improved lidar odometer stably and effectively suppresses motion trajectory drift. (A motion-prediction sketch follows this entry.)
Laser & Optoelectronics Progress
  • Publication Date: Mar. 10, 2024
  • Vol. 61, Issue 5, 0528002 (2024)
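Aside: a minimal sketch of constant-velocity motion prediction, the usual way to seed a scan matcher such as NDT with an initial transform: the last inter-frame motion increment is assumed to repeat. Shown in SE(2) for brevity; the paper operates on 3D point clouds, and the exact prediction model is not given in the abstract.

```python
import numpy as np

def se2_to_mat(x, y, yaw):
    """Build a 3x3 homogeneous SE(2) transform from (x, y, yaw)."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, x], [s, c, y], [0, 0, 1]])

def predict_initial_guess(T_prev2, T_prev1):
    """Constant-velocity prediction: assume the pose increment between
    the last two frames repeats, giving the scan matcher a good
    starting point. T_* are 3x3 SE(2) pose matrices."""
    delta = np.linalg.inv(T_prev2) @ T_prev1   # last inter-frame motion
    return T_prev1 @ delta                     # extrapolate one step

T0 = se2_to_mat(0.0, 0.0, 0.00)
T1 = se2_to_mat(1.0, 0.1, 0.05)
print(predict_initial_guess(T0, T1))  # pose near (1.99, 0.25, 0.10 rad)
```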
Large Relative Aperture Receiving Optical System and Stray Light Suppression for Laser Ranging
Xingyu Zhou, Liang Sun, Qiao Pan, Shaolong Wu, Guoyang Cao, and Xiaofeng Li
To resolve a key difficulty of current laser-ranging receiving optical systems, namely obtaining sufficient light input while remaining low-cost, lightweight, and compact, a large relative aperture receiving optical system is analyzed and designed based on Lagrange invariant theory. The proposed system uses a six-element Galilean arrangement of standard spherical lenses and an avalanche photodiode (APD) with a photosensitive surface diameter of 0.5 mm to detect the echo light signal. The optimized system has an entrance pupil diameter of 50 mm, a relative aperture of 1∶0.9, a total length of 114.9 mm, and a Lagrange invariant of 125 mrad·mm, which provides sufficient light input while meeting low-cost miniaturization requirements. To reduce the influence of stray light, a lens hood and a barrel fence structure are further designed. Stray light simulation and ray tracing indicate that the stray light suppression at off-axis angles of 20°‒85° outside the field of view meets the requirements of the laser ranging system. With a receiving optical system of Lagrange invariant 125 mrad·mm, the ranging distance increases by a factor of 21 after the optical system is added, confirming the significant application value of the receiving optical system. (A paraxial cross-check follows this entry.)
Laser & Optoelectronics Progress
  • Publication Date: Mar. 10, 2024
  • Vol. 61, Issue 5, 0528001 (2024)
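Aside: a rough paraxial cross-check of the quoted parameters (an illustrative estimate, not the authors' derivation). With entrance pupil diameter D = 50 mm and relative aperture 1∶0.9, the focal length is about 45 mm; the marginal ray angle and the APD radius y' = 0.25 mm then give a Lagrange invariant H = n u' y' close to the stated value:

```latex
\[
f \approx 0.9\,D = 45\ \mathrm{mm}, \qquad
u' \approx \arctan\frac{D/2}{f}
   = \arctan\frac{25\ \mathrm{mm}}{45\ \mathrm{mm}}
   \approx 507\ \mathrm{mrad}
\]
\[
H = n\,u'\,y'
  \approx 1 \times 507\ \mathrm{mrad} \times 0.25\ \mathrm{mm}
  \approx 127\ \mathrm{mrad\cdot mm}
\]
```

This is consistent, to paraxial accuracy, with the quoted 125 mrad·mm.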
Deep Learning Cloud Detection Based on Regression Analysis of Temporal Data
Yanan Tian, Yunling Li, Lin Sun, Shulin Pang, and Ping Zhang
Satellite optical imagery is adversely affected by cloud cover; therefore, precise cloud detection plays an important role in recognizing remote sensing image targets and quantitatively retrieving parameters. This study addresses the challenges of accurately identifying bright surfaces, thin clouds, broken clouds, and cloud boundaries, and of stabilizing cloud detection accuracy across features of different scales. We perform linear regression on short-term time series data, using the slope trend of apparent reflectance between preceding and succeeding acquisitions as the input. To fully exploit information from different scales, we employ the UNet++ model, with its dense skip connections and deep supervision, for cloud detection. Compared with U-Net, SegNet, and UNet++ trained on single-temporal data, the proposed method effectively highlights multiscale features and increases sensitivity to bright surfaces, cloud-boundary contours, and thin clouds. Our results demonstrate a high cloud detection accuracy of 98.21%, with false detection and missed detection rates reduced to 1.07% and 3.12%, respectively. The method also effectively reduces the interference of bright surfaces, such as barren land, roads, buildings, ice, and snow, on cloud identification while improving thin-cloud identification accuracy, making it suitable for remote sensing images over different underlying surfaces. (A slope-feature sketch follows this entry.)
Laser & Optoelectronics Progress
  • Publication Date: Feb. 25, 2024
  • Vol. 61, Issue 4, 0428013 (2024)
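Aside: the slope feature the abstract describes can be computed per pixel with the closed-form least-squares slope over the short time series. A minimal sketch follows; array shapes and data are placeholders, and the network input details are assumptions.

```python
import numpy as np

def reflectance_slope(stack, times):
    """Closed-form least-squares slope of apparent reflectance over a
    short time series, computed per pixel.
    stack: (T, H, W) apparent reflectance; times: (T,) acquisition times.
    Clouds typically cause an abrupt reflectance jump, so the slope map
    is a discriminative input feature for the detection network."""
    t = times - times.mean()
    y = stack - stack.mean(axis=0)
    return np.tensordot(t, y, axes=1) / (t ** 2).sum()   # (H, W) slopes

stack = np.random.rand(5, 64, 64)            # 5 acquisitions (placeholder)
slopes = reflectance_slope(stack, np.arange(5.0))
print(slopes.shape)                           # (64, 64)
```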
Algorithm for Cloud Removal from Optical Remote Sensing Images Based on the Mechanism of Fusion and Refinement
Xiaoyu Wang, Yuhang Liu, and Yan Zhang
Optical remote sensing images are widely used in weather forecasting, environmental monitoring, and marine supervision. However, images captured by optical sensors are adversely affected by atmospheric conditions and weather; cloud cover leads to content loss, reduced contrast, and color distortion. This paper proposes a cloud removal algorithm for optical remote sensing images based on a fusion-and-refinement mechanism, designed to achieve high-quality cloud removal from a single remote sensing image. The cloud removal network transforms cloudy images into cloud-free images: a multiscale cloud-feature fusion pyramid extracts and fuses cloud features across spatial scales, and a multiscale cloud-edge feature refinement unit refines the cloud's edge features and reconstructs a clear, cloud-free image. An adversarial learning strategy is adopted in which the discriminator network adaptively corrects features and separates out cloud features for more accurate discrimination, driving the generator toward realistic cloud removal results. Experiments were conducted on an open-source dataset and compared with five competing algorithms. Qualitative analysis shows that the proposed algorithm outperforms the other five, removing cloud without color distortion or artifacts. Quantitatively, its structural similarity and peak signal-to-noise ratio exceed those of the second-best algorithm by 11.9% and 15.0%, respectively, on a thin-cloud test set, and by 9.3% and 9.9% on a heavy-cloud test set. (A fusion-pyramid sketch follows this entry.)
Laser & Optoelectronics Progress
  • Publication Date: Feb. 25, 2024
  • Vol. 61, Issue 4, 0428012 (2024)
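Aside: the abstract's multiscale cloud-feature fusion pyramid is not specified in detail; the sketch below shows a generic fusion pyramid of the same flavor, with features extracted at several scales, resized back to full resolution, and fused by a 1×1 convolution. All layer sizes and scale factors are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionPyramid(nn.Module):
    """Generic multiscale feature-fusion pyramid: extract features at
    several spatial scales, resize them to full resolution, and fuse
    with a 1x1 convolution. Channel counts are illustrative."""
    def __init__(self, in_ch=3, feat_ch=32, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        self.extract = nn.ModuleList(
            [nn.Conv2d(in_ch, feat_ch, 3, padding=1) for _ in scales])
        self.fuse = nn.Conv2d(feat_ch * len(scales), feat_ch, 1)

    def forward(self, x):
        h, w = x.shape[-2:]
        feats = []
        for conv, s in zip(self.extract, self.scales):
            xs = F.avg_pool2d(x, s) if s > 1 else x      # downscale input
            f = torch.relu(conv(xs))
            feats.append(F.interpolate(f, size=(h, w),   # back to full res
                                       mode='bilinear', align_corners=False))
        return self.fuse(torch.cat(feats, dim=1))

print(FusionPyramid()(torch.randn(1, 3, 128, 128)).shape)
# torch.Size([1, 32, 128, 128])
```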